Current Issue: October-December, Volume 2012, Issue 4 (5 articles)
In our previous work, we proposed wavelet shrinkage estimation (WSE) for nonhomogeneous Poisson process (NHPP)-based software reliability models (SRMs), where WSE is a data-transform-based nonparametric estimation method. Among the many variance-stabilizing data transformations, the Anscombe transform and the Fisz transform were employed. We have shown that it can provide higher goodness-of-fit performance than the conventional maximum likelihood estimation (MLE) and least squares estimation (LSE) in many cases, in spite of its nonparametric nature, through numerical experiments with real software-fault count data. With the aim of improving the estimation accuracy of WSE, in this paper we introduce three other data transformations to preprocess the software-fault count data and investigate the influence of the different data transformations on the estimation accuracy of WSE through goodness-of-fit tests...
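As a concrete illustration of the variance-stabilizing preprocessing mentioned in this abstract, the sketch below applies the Anscombe transform, 2*sqrt(x + 3/8), to a hypothetical series of software-fault counts. The sample counts and the omitted wavelet-shrinkage step are placeholders, not the authors' implementation.

```python
import numpy as np

def anscombe(counts):
    """Anscombe variance-stabilizing transform: 2 * sqrt(x + 3/8)."""
    return 2.0 * np.sqrt(np.asarray(counts, dtype=float) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse of the Anscombe transform."""
    return (np.asarray(y, dtype=float) / 2.0) ** 2 - 3.0 / 8.0

# Hypothetical weekly software-fault counts (illustrative only).
fault_counts = np.array([12, 9, 7, 7, 5, 4, 2, 1])

stabilized = anscombe(fault_counts)
# ... wavelet shrinkage (denoising) would be applied to `stabilized` here ...
recovered = inverse_anscombe(stabilized)
print(stabilized)
print(recovered)
```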
We have proposed a method for detecting fault-prone modules based on the spam filtering technique, "fault-prone filtering." Fault-prone filtering is a method that uses a text classifier (spam filter) to classify source code modules in software. In this study, we propose an extension that uses the warning messages of a static code analyzer instead of the raw source code. Since such warnings include useful information for detecting faults, they are expected to improve the accuracy of fault-prone module prediction. The experimental results show that the warning messages of a static code analyzer are as good a source for fault-prone filtering as the original source code. Moreover, the proposed approach is more effective than the conventional method (that is, without the static code analyzer) at raising the coverage rate of actual faulty modules...
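To illustrate the idea of treating static-analyzer warnings as text for a spam-filter-style classifier, here is a minimal sketch using a naive Bayes text classifier from scikit-learn. The warning strings, labels, and classifier choice are illustrative assumptions, not the tool used in the study.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical static-analyzer warning texts, one string per module.
warnings_per_module = [
    "possible null dereference unchecked return value",
    "unused variable shadowed field",
    "array index out of bounds possible null dereference",
    "missing default case in switch",
]
# Hypothetical labels: 1 = module later turned out to be faulty, 0 = not faulty.
labels = [1, 0, 1, 0]

# Turn warning messages into token-count vectors, as a spam filter would.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(warnings_per_module)

classifier = MultinomialNB()
classifier.fit(X, labels)

# Classify a new module from its warning messages.
new_warnings = ["possible null dereference in loop"]
prediction = classifier.predict(vectorizer.transform(new_warnings))
print("fault-prone" if prediction[0] == 1 else "not fault-prone")
```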
The size and complexity of industrial-strength software systems are constantly increasing. This means that the task of managing a large software project is becoming even more challenging, especially in light of the high turnover of experienced personnel. Software clustering approaches can help with the task of understanding large, complex software systems by automatically decomposing them into smaller, easier-to-manage subsystems. The main objective of this paper is to identify important research directions in the area of software clustering that require further attention in order to develop more effective and efficient clustering methodologies for software engineering. To that end, we first present the state of the art in software clustering research. We discuss the clustering methods that have received the most attention from the research community and outline their strengths and weaknesses. Our paper describes each phase of a clustering algorithm separately. We also present the most important approaches for evaluating the effectiveness of software clustering...
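As a small illustration of automatically decomposing a system into subsystems, the sketch below clusters hypothetical modules by the Jaccard similarity of their dependency sets using average-linkage agglomerative clustering. The module names, dependencies, and the choice of algorithm are invented for the example and do not correspond to any particular method surveyed in the paper.

```python
from itertools import combinations
from scipy.cluster.hierarchy import fcluster, linkage

# Hypothetical module dependency sets (illustrative only).
modules = {
    "parser":  {"lexer", "ast", "errors"},
    "lexer":   {"errors", "io"},
    "codegen": {"ast", "symbols"},
    "symbols": {"ast"},
    "ui":      {"io", "config"},
}
names = list(modules)

def jaccard_distance(a, b):
    """1 - |intersection| / |union| of two dependency sets."""
    union = a | b
    return (1.0 - len(a & b) / len(union)) if union else 0.0

# Condensed pairwise distance list in the order scipy expects.
condensed = [jaccard_distance(modules[x], modules[y])
             for x, y in combinations(names, 2)]

# Average-linkage agglomerative clustering into two subsystems.
tree = linkage(condensed, method="average")
assignments = fcluster(tree, t=2, criterion="maxclust")
for name, cluster_id in zip(names, assignments):
    print(name, cluster_id)
```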
In this research, a hybrid cost estimation model is proposed to produce a realistic prediction model that takes into consideration software project, product, process, and environmental elements. A cost estimation dataset is built from a large number of open source projects. Those projects are divided into three domains: communication, finance, and game projects. Several data mining techniques are used to classify software projects in terms of their development complexity. Data mining techniques are also used to study the association between different software attributes and their relation to cost estimation. Results showed that finance metrics are usually the most complex in terms of code size and some other complexity metrics. Results also showed that game applications have higher values of the SLOCmath, coupling, cyclomatic complexity, and MCDC metrics. Information gain is used to evaluate the ability of object-oriented metrics to predict software complexity. The MCDC metric is shown to be the first metric in deciding a software project's complexity. A software project effort equation is created based on clustering and on all software projects' attributes. According to the software metric weight values developed in this project, we can see that MCDC, LOC, and cyclomatic complexity are still the dominant traditional metrics affecting our classification process, while number of children and depth of inheritance are the dominant object-oriented metrics at a second level...
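The information-gain criterion mentioned above can be illustrated with a short sketch that scores how well a single metric separates projects into complexity classes. The metric values, class labels, and split threshold are invented for the example and are not the paper's data.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    total = len(labels)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(labels).values())

def information_gain(metric_values, labels, threshold):
    """Entropy reduction from splitting on metric_values <= threshold."""
    left = [l for v, l in zip(metric_values, labels) if v <= threshold]
    right = [l for v, l in zip(metric_values, labels) if v > threshold]
    weighted = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labels)
    return entropy(labels) - weighted

# Hypothetical per-project MCDC values and complexity classes (illustrative only).
mcdc = [3, 5, 12, 18, 21, 30]
complexity = ["low", "low", "low", "high", "high", "high"]

# A perfect split yields a gain of 1.0 bit for two balanced classes.
print(information_gain(mcdc, complexity, threshold=15))
```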
Exploratory testing (ET) is popular, especially among agile development teams. In this paper, we study the team aspect in the ET context and explore how to use ET in team sessions to complement other testing activities. The goal was to define a team exploratory testing (TET) session approach and to provide evidence that the approach is worth using. A TET session approach is defined by means of parameters, roles, and process. Instructions for using the approach are also given. The team is the key factor that gives the approach its value and distinguishes it from basic ET. The team enables greater access to expertise, experience, and information. The TET session approach enables participants with different professional backgrounds to join the session. The sessions may be focused on different purposes; they can contribute to finding defects or to learning the system, for example. With careful parameter definition, the approach's risks are mitigated...